Results 1 - 20 of 25
1.
Sensors (Basel) ; 24(5)2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38475092

ABSTRACT

COVID-19 analysis from medical imaging is an important task that has been intensively studied in recent years due to the spread of the COVID-19 pandemic. In fact, medical imaging has often been used as a complementary or main tool to identify infected persons. Moreover, medical imaging can provide more details about COVID-19 infection, including its severity and spread, which makes it possible to evaluate the infection and follow up on the patient's state. CT scans are the most informative tool for COVID-19 infection, where the evaluation is usually performed through infection segmentation. However, segmentation is a tedious task that requires much effort and time from expert radiologists. To deal with this limitation, an efficient framework for estimating COVID-19 infection as a regression task is proposed. The goal of the Per-COVID-19 challenge is to test the efficiency of modern deep learning methods on COVID-19 infection percentage estimation (CIPE) from CT scans. Participants had to develop an efficient deep learning approach that can learn from noisy data. In addition, participants had to cope with many challenges, including those related to COVID-19 infection complexity and cross-dataset scenarios. This paper provides an overview of the COVID-19 infection percentage estimation challenge (Per-COVID-19) held at MIA-COVID-2022. Details of the competition data, challenges, and evaluation metrics are presented. The best-performing approaches and their results are described and discussed.
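Framing infection percentage estimation as a regression task means submissions are scored with regression metrics. As a minimal pure-Python sketch, the three metrics commonly reported for such challenges (Pearson correlation, MAE, RMSE) can be computed as follows; treating these as the exact challenge metrics is an assumption here:

```python
import math

def regression_metrics(y_true, y_pred):
    """MAE, RMSE and Pearson correlation between true and predicted
    infection percentages (per slice or per subject)."""
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    mean_t, mean_p = sum(y_true) / n, sum(y_pred) / n
    cov = sum((t - mean_t) * (p - mean_p) for t, p in zip(y_true, y_pred))
    std_t = math.sqrt(sum((t - mean_t) ** 2 for t in y_true))
    std_p = math.sqrt(sum((p - mean_p) ** 2 for p in y_pred))
    pc = cov / (std_t * std_p)
    return mae, rmse, pc

# Toy ground-truth vs. predicted infection percentages for four CT scans
mae, rmse, pc = regression_metrics([0, 25, 50, 75], [5, 20, 55, 70])
```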


Subjects
COVID-19, Pandemics, Humans, Benchmarking, Radionuclide Imaging, X-Ray Computed Tomography
2.
J Microsc ; 293(1): 38-58, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38053244

ABSTRACT

Here, we present a comprehensive holography-based system for detecting microparticles through microscopic holographic projections of water samples. It is intended for researchers who may be unfamiliar with holographic technology but are engaged in microparticle research, particularly in the field of water analysis. Additionally, our system can be deployed for environmental monitoring as a component of an autonomous sailboat robot. Its primary application is the large-scale classification of the diverse microplastics that are prevalent in water bodies worldwide. This paper provides a step-by-step guide for constructing the system and outlines its entire processing pipeline, including hologram acquisition for image reconstruction.

3.
Sensors (Basel) ; 23(9)2023 May 08.
Article in English | MEDLINE | ID: mdl-37177764

ABSTRACT

Developing computer-aided approaches for cancer diagnosis and grading is in increasing demand: such approaches could reduce intra- and inter-observer inconsistency, speed up the screening process, promote early diagnosis, and improve the accuracy and consistency of treatment planning. Colorectal cancer (CRC) is the third most common cancer worldwide and the second most common in women. Grading CRC is a key task in planning appropriate treatments and estimating the response to them. Unfortunately, it has not yet been fully demonstrated how the most advanced machine learning models and methodologies can impact this crucial task. This paper systematically investigates the use of advanced deep models (convolutional neural networks and transformer architectures) to improve colon carcinoma detection and grading from histological images. To the best of our knowledge, this is the first attempt at using transformer architectures and ensemble strategies for automatic colon cancer diagnosis. Results on the largest publicly available dataset demonstrated a substantial improvement over the leading state-of-the-art methods. In particular, a transformer architecture yielded a 3% increase in accuracy in the detection task (two-class problem), and integrating an ensemble strategy brought up to a 4% improvement in the grading task (three-class problem).
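An ensemble strategy like the one mentioned above can be as simple as averaging the per-class probabilities of several models before taking the argmax. The abstract does not specify the exact fusion rule, so the following is a hedged sketch of mean-probability fusion with made-up scores:

```python
def mean_ensemble(prob_lists):
    """Average the per-class probabilities predicted by several models,
    then return the argmax class and the averaged distribution."""
    n_models, n_classes = len(prob_lists), len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return avg.index(max(avg)), avg

# Three hypothetical models scoring one histology patch over three grades
label, avg = mean_ensemble([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.4, 0.5, 0.1]])
```

Averaging calibrated probabilities tends to smooth out individual model errors, which is why such simple fusion often beats any single network.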


Subjects
Carcinoma, Colonic Neoplasms, Deep Learning, Humans, Female, Early Detection of Cancer, Colonic Neoplasms/diagnosis
4.
Article in English | MEDLINE | ID: mdl-36981646

ABSTRACT

The epidemiology of COVID-19 shifted markedly during the pandemic period. Factors such as the most common symptoms and severity of infection, the circulation of different variants, the preparedness of health services, and control efforts based on pharmaceutical and non-pharmaceutical interventions played important roles in disease incidence. This constant evolution requires the continuous mapping and assessment of epidemiological features based on time-series forecasting. Nonetheless, it is necessary to identify the events, patterns, and actions that potentially affected daily COVID-19 cases. In this work, we analyzed several databases, including information on social mobility, epidemiological reports, and mass population testing, to identify patterns of reported cases and events that may indicate changes in COVID-19 behavior in the city of Araraquara, Brazil. In our analysis, we used a mathematical approach based on the fast Fourier transform (FFT) to map possible events, and machine learning approaches such as the Seasonal Autoregressive Integrated Moving Average (SARIMA) model and neural networks (NNs) for data interpretation and temporal forecasting. Our results showed a root-mean-square error (RMSE) of about 5 (more precisely, a 4.55 error over 71 cases for 20 March 2021 and a 5.57 error over 106 cases for 3 June 2021). These results demonstrate that the FFT is a useful tool for supporting the development of prevention and control measures for COVID-19.
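To illustrate how a Fourier analysis can surface periodic structure (for example, weekly reporting cycles) in a daily case series, here is a small pure-Python discrete Fourier transform sketch; the paper's actual FFT pipeline and its event-mapping criteria are assumptions not reproduced here:

```python
import math

def dominant_period(series):
    """Return the period (in samples) of the strongest non-DC frequency,
    using a naive O(n^2) discrete Fourier transform."""
    n = len(series)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(series))
        im = sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(series))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return n / best_k

# Synthetic daily case counts with a 7-day reporting cycle
cases = [100 + 30 * math.sin(2 * math.pi * t / 7) for t in range(28)]
period = dominant_period(cases)
```

In practice one would use `numpy.fft.rfft` for speed; the naive version above only shows the idea of reading periodic events off the spectrum.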


Subjects
COVID-19, Humans, COVID-19/epidemiology, Statistical Models, Brazil/epidemiology, Neural Networks, Pandemics, Forecasting
5.
Med Image Anal ; 86: 102797, 2023 05.
Article in English | MEDLINE | ID: mdl-36966605

ABSTRACT

Since the emergence of the COVID-19 pandemic in late 2019, medical imaging has been widely used to analyze this disease. Indeed, CT scans of the lungs can help diagnose, detect, and quantify COVID-19 infection. In this paper, we address the segmentation of COVID-19 infection from CT scans. To improve the performance of the Att-Unet architecture and maximize the use of the Attention Gate, we propose the PAtt-Unet and DAtt-Unet architectures. PAtt-Unet exploits input pyramids to preserve spatial awareness in all of the encoder layers, while DAtt-Unet is designed to guide the segmentation of COVID-19 infection inside the lung lobes. We also propose to combine these two architectures into a single one, which we refer to as PDAtt-Unet. To overcome the blurry segmentation of boundary pixels of COVID-19 infection, we propose a hybrid loss function. The proposed architectures were tested on four datasets under two evaluation scenarios (intra- and cross-dataset). Experimental results showed that both PAtt-Unet and DAtt-Unet improve the performance of Att-Unet in segmenting COVID-19 infection, and the combined PDAtt-Unet architecture leads to further improvement. For comparison, three baseline segmentation architectures (Unet, Unet++, and Att-Unet) and three state-of-the-art architectures (InfNet, SCOATNet, and nCoVSegNet) were tested. The comparison showed the superiority of the proposed PDAtt-Unet trained with the proposed hybrid loss (PDEAtt-Unet) over all other methods, and PDEAtt-Unet overcomes various challenges in segmenting COVID-19 infection across the four datasets and both evaluation scenarios.
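The exact form of the hybrid loss is not given in the abstract. A common hybrid for sharpening boundary segmentation combines a pixel-wise binary cross-entropy term with a region-level Dice term; the sketch below works under that assumption and is purely illustrative:

```python
import math

def hybrid_loss(pred, target, w=0.5, eps=1e-6):
    """Illustrative hybrid segmentation loss: w * binary cross-entropy
    plus (1 - w) * Dice loss, over flattened soft predictions and
    binary ground-truth masks."""
    n = len(pred)
    bce = -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
               for p, t in zip(pred, target)) / n
    intersection = sum(p * t for p, t in zip(pred, target))
    dice = 1 - (2 * intersection + eps) / (sum(pred) + sum(target) + eps)
    return w * bce + (1 - w) * dice

# A confident correct mask should score lower than a confidently wrong one
loss_good = hybrid_loss([0.9, 0.9, 0.1, 0.1], [1, 1, 0, 0])
loss_bad = hybrid_loss([0.1, 0.1, 0.9, 0.9], [1, 1, 0, 0])
```

The cross-entropy term penalizes every blurry boundary pixel individually, while the Dice term keeps the overall region overlap high; weighting the two is the usual design trade-off.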


Subjects
COVID-19, Pandemics, Humans, X-Ray Computed Tomography, Computer-Assisted Image Processing
6.
Sensors (Basel) ; 23(3)2023 Feb 03.
Article in English | MEDLINE | ID: mdl-36772733

ABSTRACT

Alzheimer's disease (AD) is the most common form of dementia. Computer-aided diagnosis (CAD) can help in the early detection of the associated cognitive impairment. The aim of this work is to improve the automatic detection of dementia in MRI brain data. For this purpose, we used an established pipeline that includes registration, slicing, and classification steps. The contribution of this research is to investigate, for the first time to our knowledge, three current and promising deep convolutional models (ResNet, DenseNet, and EfficientNet) and two transformer-based architectures (MAE and DeiT) for mapping input images to a clinical diagnosis. To allow a fair comparison, the experiments were performed on two publicly available datasets (ADNI and OASIS) using multiple benchmarks obtained by changing the number of slices per subject extracted from the available 3D volumes. The experiments showed that very deep ResNet and DenseNet models perform better than the shallow ResNet and VGG versions tested in the literature. It was also found that transformer architectures, and DeiT in particular, produce the best classification results and are more robust to the noise added by increasing the number of slices. A significant improvement in accuracy (up to 7%) was achieved compared to the leading state-of-the-art approaches, paving the way for the use of CAD approaches in real-world applications.


Subjects
Alzheimer Disease, Deep Learning, Humans, Alzheimer Disease/diagnostic imaging, Neural Networks, Magnetic Resonance Imaging/methods, Brain/diagnostic imaging
7.
Sensors (Basel) ; 22(3)2022 Jan 24.
Article in English | MEDLINE | ID: mdl-35161612

ABSTRACT

Neurodevelopmental disorders (NDD) are impairments of the growth and development of the brain and/or central nervous system. In light of clinical findings on the early diagnosis of NDD, and prompted by recent advances in hardware and software technologies, several researchers have tried to introduce automatic systems to analyse babies' movements, even in cribs. Traditional technologies for automatic baby motion analysis leverage contact sensors. Alternatively, remotely acquired video data (e.g., RGB or depth) can be used, with or without active/passive markers positioned on the body. Markerless approaches are easier to set up and maintain (without any human intervention) and work well on non-collaborative users, making them the most suitable technologies for clinical applications involving children. On the other hand, they require complex computational strategies for extracting knowledge from data, and thus depend strongly on advances in computer vision and machine learning, which are among the most rapidly expanding areas of research. As a consequence, markerless video-based analysis of children's movements for NDD has also been expanding rapidly but, to the best of our knowledge, there is not yet a survey providing a broad overview of how recent scientific developments have impacted it. This paper tries to fill this gap, listing specifically designed data acquisition tools and publicly available datasets as well. It also gives a glimpse of the most promising techniques in computer vision, machine learning, and pattern recognition that could be profitably exploited for analysing children's motion in videos.


Subjects
Machine Learning, Nervous System Diseases, Child, Humans, Motion (Physics), Movement, Software
8.
Environ Res ; 204(Pt D): 112348, 2022 03.
Article in English | MEDLINE | ID: mdl-34767822

ABSTRACT

Since the start of the COVID-19 pandemic, many studies have investigated the correlation between climate variables, such as air quality, humidity, and temperature, and the lethality of COVID-19 around the world. In this work we investigate the use of climate variables as additional features to train a data-driven multivariate forecast model to predict the short-term expected number of COVID-19 deaths in Brazilian states and major cities. The main idea is that adding these climate features as inputs to the training of data-driven models improves predictive performance compared to equivalent single-input models. We use a stacked LSTM as the network architecture for both the multivariate and univariate models, and compare the two approaches by training forecast models on the COVID-19 deaths time series of the city of São Paulo. In addition, we present a preliminary analysis based on K-means grouping of air quality index (AQI) curves. The resulting clusters open the way for transfer learning: when a new locality is added to the task, a model trained on the cluster with the most similar AQI curve can be reused. The experiments show that the best multivariate model is more skilled than the best standard data-driven univariate model we could find, using as evaluation metrics the average fitting error, the average forecast error, and the profile of the accumulated deaths for the forecast. These results show that adding informative features as inputs to a multivariate approach can further improve the quality of the prediction models.
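The K-means grouping of AQI curves can be sketched in pure Python as follows. This is a naive two-cluster version with a deterministic initialization; the paper's actual clustering setup (number of clusters, distance, preprocessing) is an assumption:

```python
import math
from itertools import combinations

def kmeans_curves(curves, iters=20):
    """Naive 2-means over equal-length time series, using Euclidean distance
    between whole curves. Initialized from the two most dissimilar curves
    so the result is deterministic."""
    centers = [list(c) for c in max(combinations(curves, 2),
                                    key=lambda pair: math.dist(*pair))]
    labels = []
    for _ in range(iters):
        labels = [min((0, 1), key=lambda j: math.dist(c, centers[j])) for c in curves]
        for j in (0, 1):
            members = [c for c, lab in zip(curves, labels) if lab == j]
            if members:
                centers[j] = [sum(v) / len(members) for v in zip(*members)]
    return labels

# Toy AQI curves: three roughly flat cities vs. three with rising pollution
flat = [[10, 10, 11, 10, 11], [11, 10, 10, 11, 10], [10, 11, 10, 10, 11]]
rising = [[10, 20, 30, 40, 50], [12, 22, 33, 41, 52], [11, 19, 31, 42, 49]]
labels = kmeans_curves(flat + rising)
```

A new locality would then inherit the forecast model trained on whichever cluster its own AQI curve falls into.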


Subjects
Air Pollution, COVID-19, Air Pollution/analysis, Brazil, Humans, Humidity, Pandemics, SARS-CoV-2, Temperature
9.
Sensors (Basel) ; 21(17)2021 Aug 31.
Article in English | MEDLINE | ID: mdl-34502769

ABSTRACT

Since the appearance of the COVID-19 pandemic (Wuhan, China, at the end of 2019), the recognition of COVID-19 from medical imaging has become an active research topic for the machine learning and computer vision community. This paper is based on the results obtained in the 2021 COVID-19 SPGC challenge, which aims to classify volumetric CT scans into normal, COVID-19, or community-acquired pneumonia (CAP) classes. To this end, we proposed a deep-learning-based approach (CNR-IEMN) that consists of two main stages. In the first stage, we trained four deep learning architectures with a multi-task strategy for slice-level classification. In the second stage, we used the previously trained models with an XGBoost classifier to classify the whole CT scan into normal, COVID-19, or CAP classes. Our approach achieved good results on the validation set, with an overall accuracy of 87.75% and sensitivities of 96.36%, 52.63%, and 95.83% for COVID-19, CAP, and normal, respectively. On the three SPGC test datasets, our approach ranked fifth overall in the COVID-19 challenge while achieving the best result for COVID-19 sensitivity, and it ranked second on two of the three test sets.
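A common way to bridge such a two-stage design is to pool the slice-level probabilities into a fixed-length per-scan feature vector that the second-stage classifier consumes. The mean/max/ratio pooling below is an assumption for illustration, not the actual CNR-IEMN feature set, and the slice probabilities are made up:

```python
def scan_features(slice_probs):
    """Pool per-slice class probabilities into one per-scan feature vector:
    mean, max and fraction-above-0.5 for each class."""
    n_slices = len(slice_probs)
    n_classes = len(slice_probs[0])
    feats = []
    for c in range(n_classes):
        col = [p[c] for p in slice_probs]
        feats += [sum(col) / n_slices, max(col), sum(v > 0.5 for v in col) / n_slices]
    return feats

# Hypothetical slice probabilities (normal, COVID-19, CAP) for one 4-slice scan
probs = [[0.8, 0.1, 0.1], [0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.7, 0.2, 0.1]]
features = scan_features(probs)
```

The fixed-length vector is what makes a gradient-boosting classifier applicable to scans with varying numbers of slices.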


Subjects
COVID-19, Deep Learning, Humans, Pandemics, SARS-CoV-2, X-Ray Computed Tomography
10.
J Imaging ; 7(9)2021 Sep 18.
Article in English | MEDLINE | ID: mdl-34564115

ABSTRACT

COVID-19 infection recognition is a very important step in the fight against the COVID-19 pandemic. In fact, many methods have been used to recognize COVID-19 infection, including Reverse Transcription Polymerase Chain Reaction (RT-PCR), X-ray scans, and Computed Tomography scans (CT scans). In addition to recognizing COVID-19 infection, CT scans can provide important information about the evolution of the disease and its severity. Given the extensive number of COVID-19 infections, estimating the infection percentage can help intensive care units free up resuscitation beds for critical cases and apply other protocols for less severe cases. In this paper, we introduce a COVID-19 percentage estimation dataset built from CT scans, where the labeling was performed by two expert radiologists. Moreover, we evaluate the performance of three Convolutional Neural Network (CNN) architectures: ResNeXt-50, DenseNet-161, and Inception-v3. For each architecture, we use two loss functions: MSE and Dynamic Huber. In addition, two pretraining scenarios are investigated (ImageNet pretrained models and models pretrained using X-ray data). The evaluated approaches achieved promising results on the estimation of COVID-19 infection. Inception-v3 with the Dynamic Huber loss function and X-ray pretraining achieved the best slice-level results: 0.9365, 5.10, and 9.25 for the Pearson Correlation coefficient (PC), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE), respectively. At the subject level, the same approach achieved 0.9603, 4.01, and 6.79 for PCsubj, MAEsubj, and RMSEsubj, respectively. These results prove that CNN architectures can provide an accurate and fast solution for estimating the COVID-19 infection percentage and monitoring the evolution of the patient's state.
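The Huber loss underlying the "Dynamic Huber" variant is quadratic for small residuals and linear for large ones, which limits the influence of noisy percentage labels. How the delta threshold is adapted during training is not specified here, so only the base loss is sketched:

```python
def huber(error, delta):
    """Huber loss: quadratic for |error| <= delta, linear beyond
    (robust to outlier labels compared with plain MSE)."""
    e = abs(error)
    return 0.5 * e * e if e <= delta else delta * (e - 0.5 * delta)

# Small residuals are penalized quadratically, large ones only linearly
small, large = huber(1.0, delta=2.0), huber(10.0, delta=2.0)
```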

11.
J Imaging ; 7(3)2021 Mar 09.
Article in English | MEDLINE | ID: mdl-34460707

ABSTRACT

In recent years, automatic tissue phenotyping has attracted increasing interest in the Digital Pathology (DP) field. For Colorectal Cancer (CRC), tissue phenotyping can diagnose the cancer and differentiate between cancer grades. The development of Whole Slide Images (WSIs) has provided the data required for creating automatic tissue phenotyping systems. In this paper, we study different hand-crafted feature-based and deep learning methods using two popular multi-class CRC-tissue-type databases: Kather-CRC-2016 and CRC-TP. For the hand-crafted features, we use two texture descriptors (LPQ and BSIF) and their combination, with two classifiers (SVM and NN) to classify the texture features into distinct CRC tissue types. For the deep learning methods, we evaluate four Convolutional Neural Network (CNN) architectures (ResNet-101, ResNeXt-50, Inception-v3, and DenseNet-161). Moreover, we propose two ensemble CNN approaches: Mean-Ensemble-CNN and NN-Ensemble-CNN. The experimental results show that the proposed approaches outperformed the hand-crafted feature-based methods, the individual CNN architectures, and the state-of-the-art methods on both databases.

12.
Front Psychol ; 12: 678052, 2021.
Article in English | MEDLINE | ID: mdl-34366997

ABSTRACT

Several studies have found a delay in the development of facial emotion recognition and expression in children with an autism spectrum condition (ASC). Several interventions have been designed to help children fill this gap. Most of them adopt technological devices (i.e., robots, computers, and avatars) as social mediators and report evidence of improvement. Few interventions have aimed at promoting both emotion recognition and expression abilities and, among these, most have focused on emotion recognition. Moreover, a crucial point is the generalization of the abilities acquired during treatment to naturalistic interactions. This study aimed to evaluate the effectiveness of two technology-based interventions focused on the expression of basic emotions, comparing a robot-based type of training with a "hybrid" computer-based one. Furthermore, we explored the engagement of the hybrid technological device introduced in the study as an intermediate step to facilitate the generalization of the acquired competencies in naturalistic settings. A two-group pre-post-test design was applied to a sample of 12 children (M = 9.33; ds = 2.19) with autism. The children were included in one of two groups: group 1 received robot-based training (n = 6), and group 2 received computer-based training (n = 6). Pre- and post-intervention evaluations (i.e., time) of facial expression recognition and production of four basic emotions (happiness, sadness, fear, and anger) were performed. Non-parametric ANOVAs found significant time effects between pre- and post-intervention on the ability to recognize sadness [t(1) = 7.35, p = 0.006; pre: M (ds) = 4.58 (0.51); post: M (ds) = 5], and to express happiness [t(1) = 5.72, p = 0.016; pre: M (ds) = 3.25 (1.81); post: M (ds) = 4.25 (1.76)] and sadness [t(1) = 10.89, p < 0.001; pre: M (ds) = 1.5 (1.32); post: M (ds) = 3.42 (1.78)]. The group*time interactions were significant for fear [t(1) = 1.019, p = 0.03] and anger expression [t(1) = 1.039, p = 0.03]. However, Mann-Whitney comparisons did not show significant differences between the robot-based and computer-based training. Finally, no difference was found in the levels of engagement between the two groups in terms of the number of voice prompts given during the interventions. Although the results are preliminary and should be interpreted with caution, this study suggests that the two types of technology-based training, one mediated by a humanoid robot and the other by a pre-recorded video of a peer, perform similarly in promoting facial recognition and expression of basic emotions in children with an ASC. The findings represent a first step towards generalizing abilities acquired in a laboratory-trained situation to naturalistic interactions.

13.
Sensors (Basel) ; 21(5)2021 Mar 03.
Article in English | MEDLINE | ID: mdl-33802428

ABSTRACT

The recognition of COVID-19 infection from X-ray images is an emerging field in the machine learning and computer vision community. Despite the great efforts made in this field since the appearance of COVID-19 in 2019, it still suffers from two drawbacks. First, the number of available X-ray scans labeled as COVID-19-infected is relatively small. Second, the works carried out in the field are separate: there are no unified data, classes, or evaluation protocols. In this work, based on public and newly collected data, we propose two X-ray COVID-19 databases: a three-class and a five-class COVID-19 dataset. For both databases, we evaluate different deep learning architectures. Moreover, we propose an Ensemble-CNNs approach which outperforms the individual deep learning architectures and shows promising results on both databases. In other words, our proposed Ensemble-CNNs achieved a high performance in the recognition of COVID-19 infection, with accuracies of 100% and 98.1% in the three-class and five-class scenarios, respectively. In addition, our approach achieved overall recognition accuracies of 75.23% and 81.0% for the three-class and five-class scenarios, respectively. We make our databases of COVID-19 X-ray scans publicly available to encourage other researchers to use them as a benchmark for their studies and comparisons.


Subjects
COVID-19/diagnostic imaging, Deep Learning, Neural Networks, Thoracic Radiography, Algorithms, Humans, X-Rays
14.
Sensors (Basel) ; 20(21)2020 Nov 07.
Article in English | MEDLINE | ID: mdl-33171757

ABSTRACT

Diatoms are among the dominant phytoplankters in marine and freshwater habitats and important biomarkers of water quality, making their identification and classification one of the current challenges of environmental monitoring. To date, the taxonomy of the species populating a water column is still conducted by marine biologists on the basis of their own experience. On the other hand, deep learning is recognized as the elective technique for solving image classification problems. However, a large amount of training data is usually needed, thus requiring the synthetic enlargement of the dataset through data augmentation. In the case of microalgae, the large variety of species that populate marine environments makes it arduous to perform an exhaustive training that considers all possible classes. However, commercial test slides containing one diatom element per class, fixed between two glasses, are available on the market. These are usually prepared by expert diatomists for taxonomy purposes, thus constituting libraries of the populations that can be found in the oceans. Here we show that such test slides are very useful for training accurate deep Convolutional Neural Networks (CNNs). We demonstrate the successful classification of diatoms based on a proper CNN ensemble and a fully augmented dataset, i.e., created starting from one single image per class available from a commercial glass slide containing 50 fixed species in a dry setting. This approach avoids the time-consuming steps of water sampling and labeling by skilled marine biologists. To accomplish this goal, we exploit the holographic imaging modality, which permits access to quantitative phase-contrast maps and flexible a posteriori refocusing due to its intrinsic 3D imaging capability. The network model is then validated using holographic recordings of live diatoms imaged in water samples, i.e., in their natural wet environmental condition.
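Starting from a single image per class, the augmentation step multiplies the training set with label-preserving geometric transforms. A minimal sketch of the eight dihedral variants (four rotations, each optionally flipped) follows; the paper's full augmentation recipe is richer and is only assumed here:

```python
def rot90(img):
    """Rotate a 2D list of pixel values 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def dihedral_augment(img):
    """Generate the eight dihedral variants of an image:
    four rotations, each with and without a horizontal flip."""
    variants, current = [], img
    for _ in range(4):
        variants.append(current)
        variants.append([row[::-1] for row in current])
        current = rot90(current)
    return variants

variants = dihedral_augment([[1, 2], [3, 4]])
```

Because a diatom's class is invariant to orientation, each variant is a valid new training sample at no labeling cost.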


Subjects
Diatoms/classification, Holography, Machine Learning, Microscopy, Neural Networks
15.
Article in English | MEDLINE | ID: mdl-32679861

ABSTRACT

The contribution of this paper is twofold. First, a new data-driven approach for predicting the COVID-19 pandemic dynamics is introduced. Second, we report and discuss the results obtained with this approach for the Brazilian states, with predictions starting as of 4 May 2020. As a preliminary study, we first used a Long Short Term Memory for Data Training-SAE (LSTM-SAE) network model. Although this first approach led to somewhat disappointing results, it served as a good baseline for testing other ANN types. Subsequently, in order to identify relevant countries and regions for training ANN models, we conducted a clustering of the world's regions where the pandemic was at an advanced stage. This clustering is based on manually engineered features representing a country's response to the early spread of the pandemic, and the clusters obtained are used to select the relevant countries for training the models. The final models retained are Modified Auto-Encoder (MAE) networks, which are trained on these clusters and learn to predict future data for Brazilian states. These predictions are used to estimate important statistics about the disease, such as peaks and the number of confirmed cases. Finally, curve fitting is carried out to find the distribution that best fits the outputs of the MAE and to refine the estimates of the pandemic's peaks. Predicted numbers reach a total of more than one million infected Brazilians, distributed among the different states, with São Paulo leading with about 150 thousand predicted confirmed cases. The results indicate that the pandemic was still growing in Brazil, with most states' infection peaks estimated in the second half of May 2020. The estimated end of the pandemic (97% of cases reaching an outcome) spreads between June and the end of August 2020, depending on the state.


Subjects
Betacoronavirus/isolation & purification, Coronavirus Infections/epidemiology, Viral Pneumonia/epidemiology, Brazil/epidemiology, COVID-19, Coronavirus Infections/virology, Forecasting, Humans, Pandemics, Viral Pneumonia/virology, SARS-CoV-2
16.
Sensors (Basel) ; 20(13)2020 Jul 03.
Article in English | MEDLINE | ID: mdl-32635375

ABSTRACT

The automatic detection of eye positions, their temporal consistency, and their mapping into a line of sight in the real world (to find where a person is looking) is referred to in the scientific literature as gaze tracking. This has become a very hot topic in the field of computer vision during the last decades, with a surprising and continuously growing number of application fields. A very long journey has been made since the first pioneering works, and this continuous search for more accurate solutions has been further boosted in the last decade, when deep neural networks revolutionized the whole machine learning area, and gaze tracking with it. In this arena, it is increasingly useful to find guidance in survey/review articles that collect the most relevant works, state the clear pros and cons of existing techniques, and introduce a precise taxonomy. Such manuscripts allow researchers and technicians to choose the best way to move towards their application or scientific goals. The literature contains holistic and specifically technological surveys (even if not updated), but, unfortunately, there is no overview discussing how the great advancements in computer vision have impacted gaze tracking. This work is an attempt to fill that gap. It also introduces a wider point of view that leads to a new taxonomy (extending the consolidated ones) by considering gaze tracking as a more exhaustive task that aims at estimating the gaze target from different perspectives: from the eye of the beholder (first-person view), from an external camera framing the beholder, from a third-person view looking at the scene where the beholder is placed, and from an external view independent of the beholder.


Subjects
Eye Movements, Eye-Tracking Technology/instrumentation, Eye, Ocular Fixation, Computers, Humans, Neural Networks
17.
Article in English | MEDLINE | ID: mdl-32349259

ABSTRACT

The epidemiological figures of the SARS-CoV-2 epidemic in Italy are higher than those observed in China. Our objective was to model the SARS-CoV-2 outbreak progression in the Italian regions vs. Lombardy to assess the epidemic's progression. Our setting was Italy, and especially Lombardy, which is experiencing a heavy burden of SARS-CoV-2 infections. The peak of new daily cases was reached on the 29th, while it was delayed in Central and Southern Italian regions compared to Northern ones. In our models, we estimated the basic reproduction number (R0), which represents the average number of people that can be infected by a person who has already acquired the infection, both by fitting the exponential growth rate of the infection across a 1-month period and by using day-by-day assessments based on single observations. We used the susceptible-exposed-infected-removed (SEIR) compartment model to predict the spreading of the pandemic in Italy. The two methods provide values in agreement, although the first method, based on the exponential fit, should provide a better estimation, being computed on the entire time series. Taking into account the growth rate of the infection across a 1-month period, each infected person in Lombardy has infected about 4 other people (3.6 based on data of April 23rd), compared to a value of R0 = 2.68 reported for the Chinese city of Wuhan. According to our model, Piedmont, Veneto, Emilia Romagna, Tuscany and Marche will reach an R0 value of up to 3.5. The R0 was 3.11 for Lazio and 3.14 for the Campania region, the latter being the highest value among the Southern Italian regions, followed by Apulia (3.11), Sicily (2.99), Abruzzo (3.0), Calabria (2.84), Basilicata (2.66), and Molise (2.6). The R0 value has decreased in Lombardy and the Northern regions, while it has increased in Central and Southern regions. The expected peak of the SEIR model is set at the end of March at the national level, with Southern Italian regions reaching the peak in the first days of April. Regarding the strengths and limitations of this study, our model is based on assumptions that might not exactly correspond to the evolution of the epidemic. What we know about the SARS-CoV-2 epidemic is based on Chinese data that seem to differ from the Italian ones; Lombardy is experiencing an evolution of the epidemic that seems unique within Italy and Europe, probably due to demographic and environmental factors.
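The SEIR compartment model used above can be integrated numerically. Below is a forward-Euler sketch with illustrative parameters giving R0 = beta/gamma = 3, close to the regional estimates reported; all parameter values here are assumptions for illustration, not the study's fitted values:

```python
def seir(beta, sigma, gamma, s0, e0, i0, rec0, days, dt=0.1):
    """Forward-Euler integration of the SEIR compartment model
    (all compartments are fractions of the population)."""
    s, e, i, r = s0, e0, i0, rec0
    traj = [(s, e, i, r)]
    for _ in range(int(days / dt)):
        new_exposed = beta * s * i      # S -> E (transmission)
        new_infectious = sigma * e      # E -> I (end of latency)
        new_removed = gamma * i         # I -> R (recovery/removal)
        s -= new_exposed * dt
        e += (new_exposed - new_infectious) * dt
        i += (new_infectious - new_removed) * dt
        r += new_removed * dt
        traj.append((s, e, i, r))
    return traj

# Illustrative parameters: R0 = beta / gamma = 3, 5-day latency and infectious periods
traj = seir(beta=0.6, sigma=0.2, gamma=0.2,
            s0=0.999, e0=0.001, i0=0.0, rec0=0.0, days=200)
peak_infectious = max(state[2] for state in traj)
```

The epidemic peak corresponds to the maximum of the infectious compartment, which is the quantity the study projects per region.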


Subjects
Coronavirus Infections/epidemiology, Coronavirus, Disease Outbreaks, Pandemics, Viral Pneumonia/epidemiology, Basic Reproduction Number, Betacoronavirus, COVID-19, China/epidemiology, Coronavirus Infections/transmission, Humans, Italy/epidemiology, Viral Pneumonia/transmission, SARS-CoV-2, Severe Acute Respiratory Syndrome/epidemiology, Sicily/epidemiology
18.
J Imaging ; 6(11)2020 Nov 20.
Article in English | MEDLINE | ID: mdl-34460570

ABSTRACT

Baggage travelling on a conveyor belt in the sterile area (the rear collector located after the check-in counters) often gets stuck due to traffic jams, mainly caused by incorrect entries from the check-in counters onto the collector belt. Capturing suitcase appearance on the Baggage Handling System (BHS) and at airport checkpoints, and re-identifying each bag, allows baggage to be handled more safely and quickly. In this paper, we propose a Siamese Neural Network-based model that estimates baggage similarity: given two input images, the network predicts whether they belong to the same baggage identity. Trained on sets of images of the same suitcase taken in different conditions, the proposed network learns discriminative features in order to measure the similarity between two different images of the same baggage identity, and it can easily be applied on top of different pre-trained backbones. We evaluate our model on a publicly available suitcase dataset, where it outperforms the latest state-of-the-art architecture in terms of accuracy.
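The Siamese decision rule described above can be sketched in a few lines: both images pass through the same (weight-tied) embedder, and a similarity score decides whether they show the same bag. Here a random linear projection stands in for the paper's learned CNN backbone, and the 0.8 threshold is an arbitrary illustrative choice:

```python
import numpy as np

# Toy sketch of the Siamese decision rule (hypothetical embedder; the paper
# uses a pre-trained CNN backbone, not this random projection).
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))  # stand-in for learned, shared feature-extractor weights

def embed(image):
    """Map a flattened image to a 16-d descriptor; both branches share W (weight tying)."""
    return W @ image

def same_baggage(img_a, img_b, threshold=0.8):
    """Predict whether two images show the same suitcase via cosine similarity."""
    a, b = embed(img_a), embed(img_b)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return cos >= threshold

suitcase = rng.normal(size=64)
same_view = suitcase + 0.05 * rng.normal(size=64)   # same bag, slight viewpoint change
other_bag = rng.normal(size=64)
print(same_baggage(suitcase, same_view), same_baggage(suitcase, other_bag))
```

In the real system the shared weights are trained so that images of the same bag embed close together and images of different bags embed far apart, which is what makes the threshold comparison meaningful.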

19.
J Imaging ; 6(7)2020 Jul 02.
Article in English | MEDLINE | ID: mdl-34460655

ABSTRACT

An accurate estimation of passenger attendance in each metro car contributes to the safe coordination and sorting of passenger crowds in each metro station. In this work we propose a multi-head Convolutional Neural Network (CNN) architecture trained to estimate passenger attendance in a metro car. The proposed network architecture consists of two main parts: a convolutional backbone, which extracts features over the whole input image, and multi-head layers that estimate a density map, needed to predict the number of people within the crowd image. The network performance is first evaluated on publicly available crowd counting datasets, including ShanghaiTech part_A, ShanghaiTech part_B and UCF_CC_50, and the network is then trained and tested on our dataset acquired in subway cars in Italy. In both cases a comparison is made against the most relevant and recent state-of-the-art crowd counting architectures, showing that our proposed MH-MetroNet architecture outperforms them in terms of Mean Absolute Error (MAE), Mean Squared Error (MSE) and passenger-count prediction.
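The density-map idea behind this family of crowd counters is that each annotated head contributes a small unit-mass Gaussian blob, so the predicted count is simply the integral (sum) of the map. A toy ground-truth construction, not the MH-MetroNet architecture itself:

```python
import numpy as np

# Toy density-map construction for crowd counting (illustrative, not MH-MetroNet).
def gaussian_blob(shape, center, sigma=2.0):
    """Unit-mass Gaussian centred on one annotated head position."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    g = np.exp(-((ys - center[0]) ** 2 + (xs - center[1]) ** 2) / (2 * sigma**2))
    return g / g.sum()  # normalise so each person contributes exactly 1 to the sum

def density_map(shape, head_positions):
    """Sum one blob per annotated head; integrating the map recovers the count."""
    dm = np.zeros(shape)
    for p in head_positions:
        dm += gaussian_blob(shape, p)
    return dm

heads = [(10, 12), (25, 30), (40, 8)]   # three annotated passengers
dm = density_map((48, 48), heads)
print(round(dm.sum()))                   # integrates back to the head count: 3
```

A network trained to regress such maps is then scored by MAE/MSE between the summed predicted map and the annotated count, which is exactly the evaluation reported in the abstract.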

20.
Sensors (Basel) ; 18(11)2018 Nov 16.
Article in English | MEDLINE | ID: mdl-30453518

ABSTRACT

In this paper, a computational approach is proposed and put into practice to assess the capability of children diagnosed with Autism Spectrum Disorder (ASD) to produce facial expressions. The proposed approach is based on computer vision components working on sequences of images acquired by an off-the-shelf camera in unconstrained conditions. Action unit intensities are estimated by analyzing local appearance, and then temporal and geometrical relationships, learned by Convolutional Neural Networks, are exploited to regularize the gathered estimates. To cope with stereotyped movements and to highlight even subtle voluntary movements of facial muscles, a personalized and contextual statistical model of the non-emotional face is formulated and used as a reference. Experimental results demonstrate how the proposed pipeline can improve the analysis of facial expressions produced by ASD children. A comparison of the system's outputs with the evaluations performed by psychologists on the same group of ASD children shows how the quantitative analysis of children's abilities helps to go beyond traditional qualitative ASD assessment/diagnosis protocols, whose outcomes are affected by human limitations in observing and understanding multi-cue behaviors such as facial expressions.
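The personalized neutral-face reference described above can be sketched as a per-child statistical baseline: action-unit (AU) intensities over many non-emotional frames define a mean and spread, and later frames are scored by their deviation from that baseline so even subtle voluntary movements stand out. All values below are synthetic and illustrative, not the paper's data or its exact model:

```python
import numpy as np

# Sketch of a personalised neutral-face baseline (synthetic data, 5 AUs).
rng = np.random.default_rng(1)

# Per-child baseline: AU intensities measured over many neutral frames.
neutral_frames = rng.normal(loc=0.2, scale=0.05, size=(200, 5))
mu, sd = neutral_frames.mean(axis=0), neutral_frames.std(axis=0)

def deviation_score(frame, mu=mu, sd=sd):
    """Per-AU z-score of a frame against the child's own neutral-face statistics."""
    return np.abs(frame - mu) / sd

neutral = rng.normal(loc=0.2, scale=0.05, size=5)  # a new non-emotional frame
smile = neutral.copy()
smile[2] += 0.5                                     # strong activation of one AU
print(deviation_score(smile)[2] > deviation_score(neutral)[2])
```

Because the reference is built per child, habitual stereotyped activity is absorbed into the baseline, and only departures from that child's own neutral face register as expression activity.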


Subjects
Face/physiology , Facial Expression , Neural Networks, Computer , Adolescent , Algorithms , Autism Spectrum Disorder/diagnosis , Child , Emotions/physiology , Female , Humans , Male